Linear separability and human category learning: Revisiting a classic study

Authors

  • Kimery R. Levering
  • Nolan Conaway
  • Kenneth J. Kurtz
Abstract

The ability to acquire non-linearly separable (NLS) classifications is well documented in the study of human category learning. In particular, one experiment (Medin & Schwanenflugel, 1981; E4) is viewed as the canonical demonstration that, when within- and between-category similarities are evenly matched, NLS classifications are not more difficult to acquire than linearly separable ones. The results of this study are somewhat at issue due to non-standard methodology and small sample size. We present a replication and extension of this classic experiment. We did not find any evidence of an advantage for linearly separable classifications. In fact, the marginal NLS advantage observed in the original study was strengthened: we found a significant advantage for the NLS classification. These results are discussed with respect to accounts provided by formal models of human classification learning.
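The distinction at issue can be made concrete with a minimal sketch. The stimuli below are hypothetical binary-feature items, not the Medin & Schwanenflugel materials: an AND-like category structure is linearly separable, while the classic XOR structure is not, which a simple perceptron learner detects by converging on the former and cycling forever on the latter.

```python
def perceptron_separable(examples, epochs=100):
    """Return True if a perceptron finds a boundary that classifies
    every example correctly (i.e., the structure is linearly separable)."""
    w = [0.0] * len(examples[0][0])  # weight vector
    b = 0.0                          # bias term
    for _ in range(epochs):
        errors = 0
        for x, y in examples:  # y is +1 or -1 (category label)
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:  # misclassified: apply the perceptron update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:  # a full error-free pass means a separator was found
            return True
    return False

# AND-like structure: one category member sits on one side of a line
ls_structure = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
# XOR structure: no single line separates the two categories
nls_structure = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]

print(perceptron_separable(ls_structure))   # True
print(perceptron_separable(nls_structure))  # False
```

The empirical question the abstract addresses is whether this formal distinction matters for human learners once item similarities are equated; the replication reported here suggests it does not.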

Related articles

Expanding the search for a linear separability constraint on category learning.

Formal models of categorization make different predictions about the theoretical importance of linear separability. Prior research, most of which has failed to find support for a linear separability constraint on category learning, has been conducted using tasks that involve learning two categories with a small number of members. The present experiment used four categories with three or nine pa...

Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces

It has been a long-standing problem to efficiently learn a linear separator using as few labels as possible. In this work, we propose an efficient perceptron-based algorithm for actively learning homogeneous linear separators under uniform distribution. Under bounded noise, where each label is flipped with probability at most η, our algorithm achieves near-optimal Õ (

Linear Separability and Concept Learning: Eyetracking Individual Differences

Asking people whether a tomato is fruit, or whether chess is a sport, reveals differences between individuals’ concepts. How can the same perceptual information result in different representations? Perhaps people’s initial attention to particular category information determines what representations they will form. When first learning about chess, for example, attention to its competitive aspect...

Deep Linear Discriminant Analysis

We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns linearly separable latent representations in an end-to-end fashion. Classic LDA extracts features which preserve class separability and is used for dimensionality reduction for many classification problems. The central idea of this paper is to put LDA on top of a deep neural network. This can be seen as a non-linear extension...

Separability is a Learner’s Best Friend

Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert’s analysis of the Perceptron learning method. The concept forms a novel dimension along which to conceptualise learning methods. The present paper shows how geometric separability can be defined and demonstrates that it accurately predicts the performance of at least one empirical learni...


Journal:

Volume   Issue

Pages  -

Publication date: 2016